Cardiovascular disease (CVD) is the number one cause of death worldwide. Although growing evidence shows that atrial fibrillation (AF) is closely associated with various CVDs, this arrhythmia is usually diagnosed using electrocardiography (ECG), a risk-free, non-invasive, and cost-effective tool. Continuously and remotely monitoring a subject's ECG information has the potential to enable prompt diagnosis and timely pre-treatment of AF before any life-threatening condition develops, which could ultimately reduce CVD-related mortality. In this manuscript, the design and implementation of a personalized healthcare system embodying a wearable ECG device, a mobile application, and a back-end server are presented. The system continuously monitors users' ECG information to provide personalized health warnings/feedback, and users can communicate with their paired health advisors through the system for remote diagnosis, intervention, etc. The implemented wearable ECG devices have been evaluated and showed excellent consistency (CVRMS = 5.5%), acceptable consistency (CVRMS = 12.1%), and negligible RR-interval errors (< 1.4%). To boost the battery life of the wearable devices, a lossy compression scheme exploiting the quasi-periodic feature of ECG signals was proposed. Compared with recognized schemes, it performed better in terms of compression efficiency and distortion, achieving at least a 2x compression ratio (CR) at a given PRD or RMSE for ECG signals from the MIT-BIH database. To enable automated AF diagnosis/screening in the proposed system, a ResNet-based AF detector was developed. For the ECG records of the 2017 PhysioNet/CinC Challenge, this AF detector obtained an average testing F1 of 85.10% and a best testing F1 of 87.31%, outperforming the state-of-the-art.
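To make the quasi-periodicity idea concrete, here is a minimal Python sketch of a template-plus-residual codec. It is an illustrative assumption rather than the manuscript's actual compression scheme; the peak-detection thresholds, beat length, and quantization step are placeholder values.

```python
# Toy lossy ECG compressor exploiting quasi-periodicity: segment beats around
# detected R-peaks, subtract a mean beat template, and coarsely quantize the
# residual. Illustrative only; not the manuscript's codec.
import numpy as np
from scipy.signal import find_peaks

def compress_ecg(sig, fs=360, beat_len=256, q_step=0.02):
    """Return a beat template, int8 residual codes, and the detected peak positions."""
    peaks, _ = find_peaks(sig, distance=int(0.4 * fs), prominence=0.5)
    half = beat_len // 2
    beats = [sig[p - half:p + half] for p in peaks
             if p - half >= 0 and p + half <= len(sig)]
    beats = np.stack(beats)                        # (n_beats, beat_len)
    template = beats.mean(axis=0)
    residual_codes = np.round((beats - template) / q_step).astype(np.int8)
    return template, residual_codes, peaks

def decompress_ecg(template, residual_codes, q_step=0.02):
    """Reconstruct the individual beats from the template and quantized residuals."""
    return template + residual_codes.astype(np.float32) * q_step

# Example with a synthetic quasi-periodic signal standing in for an ECG trace.
t = np.arange(0, 10, 1 / 360.0)
ecg_like = np.sin(2 * np.pi * 1.2 * t) ** 63       # sharp periodic "R-peaks"
tpl, codes, _ = compress_ecg(ecg_like)
beats_hat = decompress_ecg(tpl, codes)
```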
Autonomous vehicles must often contend with conflicting planning requirements, e.g., safety and comfort could be at odds with each other if avoiding a collision calls for slamming the brakes. To resolve such conflicts, assigning importance ranking to rules (i.e., imposing a rule hierarchy) has been proposed, which, in turn, induces rankings on trajectories based on the importance of the rules they satisfy. On one hand, imposing rule hierarchies can enhance interpretability, but introduces combinatorial complexity to planning; on the other hand, differentiable reward structures can be leveraged by modern gradient-based optimization tools, but are less interpretable and unintuitive to tune. In this paper, we present an approach to equivalently express rule hierarchies as differentiable reward structures amenable to modern gradient-based optimizers, thereby achieving the best of both worlds. We achieve this by formulating rank-preserving reward functions that are monotonic in the rank of the trajectories induced by the rule hierarchy; i.e., higher-ranked trajectories receive higher reward. Equipped with a rule hierarchy and its corresponding rank-preserving reward function, we develop a two-stage planner that can efficiently resolve conflicting planning requirements. We demonstrate that our approach can generate motion plans at ~7-10 Hz for various challenging road navigation and intersection negotiation scenarios.
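As a concrete illustration, the following Python sketch shows one rank-preserving reward for the special case of Boolean rules (an assumption; the paper's formulation is more general): weighting rule i by 2^(N-1-i) makes the scalar reward order trajectories exactly as the lexicographic rule hierarchy does. The rule predicates and trajectory fields below are hypothetical.

```python
# Rank-preserving reward for a hierarchy of Boolean rules: higher-priority
# rules get exponentially larger weights, so the scalar reward is monotonic
# in the lexicographic rank induced by the hierarchy.
from typing import Callable, Sequence

def rank_preserving_reward(rules: Sequence[Callable[[dict], bool]],
                           trajectory: dict) -> int:
    n = len(rules)
    return sum((2 ** (n - 1 - i)) * int(rule(trajectory))
               for i, rule in enumerate(rules))

# Example hierarchy: collision avoidance outranks comfort, which outranks speed-keeping.
rules = [
    lambda traj: traj["min_clearance"] > 0.5,     # 1) no collision (hypothetical field)
    lambda traj: traj["max_decel"] < 3.0,         # 2) comfortable braking
    lambda traj: abs(traj["speed_error"]) < 1.0,  # 3) track desired speed
]
brake_hard = {"min_clearance": 1.2, "max_decel": 6.0, "speed_error": 0.2}
no_brake   = {"min_clearance": 0.1, "max_decel": 1.0, "speed_error": 0.0}
# Braking hard violates comfort but satisfies the higher-priority safety rule,
# so it must (and does) receive the higher reward.
assert rank_preserving_reward(rules, brake_hard) > rank_preserving_reward(rules, no_brake)
```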
Eliminating ghosting artifacts caused by moving objects is a challenging problem in high dynamic range (HDR) imaging. In this letter, we present a hybrid model consisting of a convolutional encoder and a Transformer decoder to generate ghost-free HDR images. In the encoder, a context aggregation network and a non-local attention block are adopted to optimize multi-scale features and capture both global and local dependencies of multiple low dynamic range (LDR) images. A decoder based on the Swin Transformer is utilized to improve the reconstruction capability of the proposed model. Motivated by the pronounced difference between the presence and absence of artifacts in the structure tensor (ST) domain, we integrate the ST information of the LDR images as auxiliary inputs of the network and use an ST loss to further suppress artifacts. Unlike previous approaches, our network is capable of processing an arbitrary number of input LDR images. Qualitative and quantitative experiments demonstrate the effectiveness of the proposed method in comparison with existing state-of-the-art HDR deghosting models. Codes are available at https://github.com/pandayuanyu/HSTHdr.
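For readers unfamiliar with the structure tensor, the sketch below shows one common way to compute it, together with a plausible L1-style ST loss; it is our own illustration, not the authors' exact preprocessing or loss implementation.

```python
# Structure tensor of a grayscale image: outer product of image gradients,
# smoothed with a Gaussian. The three unique components (Jxx, Jxy, Jyy) can be
# stacked as extra input channels or compared between images as a loss term.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(gray: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Return (H, W, 3) array holding the smoothed components Jxx, Jxy, Jyy."""
    ix = sobel(gray, axis=1, mode="reflect")
    iy = sobel(gray, axis=0, mode="reflect")
    jxx = gaussian_filter(ix * ix, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    return np.stack([jxx, jxy, jyy], axis=-1)

def st_l1_loss(pred_gray: np.ndarray, ref_gray: np.ndarray) -> float:
    """One possible ST loss: mean absolute difference of the structure tensors."""
    return float(np.mean(np.abs(structure_tensor(pred_gray)
                                - structure_tensor(ref_gray))))
```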
With the drive to create a decentralized digital economy, Web 3.0 has become a cornerstone of digital transformation, developed on the basis of computing-force networking, distributed data storage, and blockchain. With the rapid realization of quantum devices, Web 3.0 is being developed in parallel with the deployment of quantum cloud computing and quantum Internet. In this regard, quantum computing first disrupts the original cryptographic systems that protect data security while reshaping modern cryptography with the advantages of quantum computing and communication. Therefore, in this paper, we introduce a quantum blockchain-driven Web 3.0 framework that provides information-theoretic security for decentralized data transferring and payment transactions. First, we present the framework of quantum blockchain-driven Web 3.0 with future-proof security during the transmission of data and transaction information. Next, we discuss the potential applications and challenges of implementing quantum blockchain in Web 3.0. Finally, we describe a use case for quantum non-fungible tokens (NFTs) and propose a quantum deep learning-based optimal auction for NFT trading to maximize the achievable revenue for sufficient liquidity in Web 3.0. In this way, the proposed framework can achieve proven security and sustainability for the next-generation decentralized digital society.
Data depth, introduced by Tukey (1975), is an important tool in data science, robust statistics, and computational geometry. One chief barrier to its broader practical utility is that many common measures of depth are computationally intensive, requiring on the order of $n^d$ operations to exactly compute the depth of a single point within a data set of $n$ points in $d$-dimensional space. Often however, we are not directly interested in the absolute depths of the points, but rather in their \textit{relative ordering}. For example, we may want to find the most central point in a data set (a generalized median), or to identify and remove all outliers (points on the fringe of the data set with low depth). With this observation, we develop a novel and instance-adaptive algorithm for adaptive data depth computation by reducing the problem of exactly computing $n$ depths to an $n$-armed stochastic multi-armed bandit problem which we can efficiently solve. We focus our exposition on simplicial depth, developed by \citet{liu1990notion}, which has emerged as a promising notion of depth due to its interpretability and asymptotic properties. We provide general instance-dependent theoretical guarantees for our proposed algorithms, which readily extend to many other common measures of data depth including majority depth, Oja depth, and likelihood depth. When specialized to the case where the gaps in the data follow a power law distribution with parameter $\alpha<2$, we show that we can reduce the complexity of identifying the deepest point in the data set (the simplicial median) from $O(n^d)$ to $\tilde{O}(n^{d-(d-1)\alpha/2})$, where $\tilde{O}$ suppresses logarithmic factors. We corroborate our theoretical results with numerical experiments on synthetic data, showing the practical utility of our proposed methods.
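The following Python sketch (restricted to two dimensions, with a simple elimination rule of our own choosing) illustrates the reduction: each point's simplicial depth is treated as the mean of a Bernoulli arm, where a pull draws a random triangle and tests containment, and clearly shallow points are eliminated as their confidence intervals separate from the leader's.

```python
# Bandit-style search for an approximate simplicial median in 2D.
import numpy as np

def _cross2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def in_triangle(p, a, b, c):
    # Same-side (sign) test for 2D point-in-triangle containment.
    d1 = _cross2(b - a, p - a)
    d2 = _cross2(c - b, p - b)
    d3 = _cross2(a - c, p - c)
    return (d1 >= 0) == (d2 >= 0) == (d3 >= 0)

def approx_simplicial_median(x, rounds=20, pulls_per_round=50, seed=0):
    """Pull = sample a random triangle and test containment; eliminate points
    whose confidence interval falls below the best lower bound."""
    rng = np.random.default_rng(seed)
    n = len(x)
    active = np.arange(n)
    wins = np.zeros(n)
    pulls = np.zeros(n)
    for _ in range(rounds):
        for i in active:
            for _ in range(pulls_per_round):
                a, b, c = x[rng.choice(n, size=3, replace=False)]
                wins[i] += in_triangle(x[i], a, b, c)
            pulls[i] += pulls_per_round
        est = wins[active] / pulls[active]
        radius = np.sqrt(np.log(4.0 * n * pulls[active]) / pulls[active])
        # Keep only arms whose upper bound still beats the best lower bound.
        active = active[est + radius >= np.max(est - radius)]
        if len(active) == 1:
            break
    return int(active[np.argmax(wins[active] / pulls[active])])

points = np.random.default_rng(1).normal(size=(60, 2))
deepest = approx_simplicial_median(points)   # index of the approximate simplicial median
```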
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs that have tight computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
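As a flavor of what such an entry might look like, here is a hypothetical tiny anchor-based 3X super-resolution model in PyTorch; it is not any particular submission, and the layer widths are arbitrary. The nearest-neighbor "anchor" branch lets the network learn only a residual, which tends to survive INT8 quantization better than predicting raw pixels.

```python
# Hypothetical baseline in the spirit of the challenge entries: a tiny CNN
# that upsamples 3x with PixelShuffle and adds a nearest-neighbor anchor.
import torch
import torch.nn as nn

class TinySR3x(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * 9, 3, padding=1),   # 9 = upscaling factor squared
        )
        self.shuffle = nn.PixelShuffle(3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Repeating each channel 9 times makes the anchor a nearest-neighbor
        # upsample after PixelShuffle, so the body only learns a residual.
        anchor = x.repeat_interleave(9, dim=1)
        return self.shuffle(self.body(x) + anchor)

lr = torch.rand(1, 3, 360, 640)          # a low-resolution frame
sr = TinySR3x()(lr)                      # -> (1, 3, 1080, 1920), i.e. Full HD
```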
Providing an accurate estimated time of package delivery on users' purchasing pages is of great importance to e-commerce platforms, since it influences users' purchasing decisions and post-purchase experiences. Although this problem shares some common issues with the conventional estimated time of arrival (ETA), it is more challenging in the following aspects: 1) Inductive inference: models are required to predict ETA for orders with unseen retailers and addresses; 2) High-order interaction of order semantic information: apart from the spatio-temporal features, the estimated time also varies greatly with other factors, such as the packaging efficiency of retailers, as well as the high-order interactions of these factors. In this paper, we propose an inductive graph transformer (IGT) that leverages raw feature information and structural graph data to estimate package delivery time. Different from previous graph transformer architectures, IGT adopts a decoupled pipeline and trains the transformer as a regression function that can capture the multiplex information from both the raw features and the dense embeddings encoded by a graph neural network (GNN). In addition, we further simplify the GNN structure by removing its non-linear activation and learnable linear transformation matrix. The reduced parameter search space and linear information propagation in the simplified GNN enable IGT to be applied in large-scale industrial scenarios. Experiments on real-world logistics datasets show that our proposed model significantly outperforms state-of-the-art methods on delivery time estimation. The source code is available at: https://github.com/enoche/IGT-WSDM23.
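The simplified GNN step can be written down compactly; the sketch below (our own notation and toy graph) propagates raw features through K powers of the symmetrically normalized adjacency with no non-linearity or learnable weights, producing the dense embeddings that the transformer regressor would consume.

```python
# Parameter-free message passing: apply D^{-1/2}(A+I)D^{-1/2} to the raw
# node features k times, as in the simplified GNN described in the abstract.
import numpy as np

def simplified_gnn_propagate(adj: np.ndarray, x: np.ndarray, k: int = 2) -> np.ndarray:
    """adj: (n, n) unweighted graph, x: (n, f) node features."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    for _ in range(k):
        x = a_norm @ x
    return x

# Toy example: 4 nodes (e.g., retailers/addresses), 5-dimensional raw features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
emb = simplified_gnn_propagate(adj, np.random.rand(4, 5), k=2)  # -> (4, 5)
```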
Estimating the average treatment effect (ATE) from observational data is challenging due to selection bias. Existing works mainly address this challenge in two ways. Some researchers propose constructing a score function that satisfies the orthogonal condition, which guarantees that the established estimator is "orthogonal" and therefore more robust; others explore representation learning models to achieve balanced representations between the treated and the controlled groups. However, existing studies fail to 1) discriminate between treated and controlled units in the representation space so as to avoid the over-balancing issue, and 2) fully utilize the "orthogonality information". In this paper, we propose a moderately-balanced representation learning (MBRL) framework based on recent covariate-balanced representation learning methods and orthogonal machine learning theory. The framework protects the representation from being over-balanced via multi-task learning; meanwhile, MBRL incorporates the noise orthogonality information into the training and validation stages for better ATE estimation. Comprehensive experiments on benchmark and simulated datasets demonstrate the superiority and robustness of our method for treatment effect estimation compared with existing state-of-the-art methods.
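For context, the orthogonal score referred to here is, in its standard form, the doubly robust (AIPW) score; the sketch below shows that building block only and does not reproduce MBRL's balancing or multi-task losses.

```python
# Neyman-orthogonal (AIPW / doubly robust) score for the ATE.
import numpy as np

def orthogonal_ate(y, t, mu0, mu1, e):
    """y: outcomes, t: binary treatment, mu0/mu1: outcome-model predictions,
    e: propensity scores. Returns the AIPW estimate of the ATE."""
    psi = (mu1 - mu0
           + t * (y - mu1) / e
           - (1 - t) * (y - mu0) / (1 - e))
    return psi.mean()

# Toy example with a known ATE of 2.0 and plug-in nuisance functions.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
e_true = 1.0 / (1.0 + np.exp(-x))
t = rng.binomial(1, e_true)
y = 2.0 * t + x + rng.normal(size=1000)
print(orthogonal_ate(y, t, mu0=x, mu1=x + 2.0, e=e_true))   # close to 2.0
```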
Many practical decision-making problems in economics and healthcare seek to estimate the average treatment effect (ATE) from observational data. Double/debiased machine learning (DML) is one of the prevalent methods for estimating the ATE in observational studies. However, DML estimators can suffer from an error-compounding issue and even give extreme estimates when the propensity scores are misspecified or very close to 0 or 1. Previous literature has addressed this issue from a theoretical perspective. In this paper, we propose a robust causal learning (RCL) method to offset the deficiencies of the DML estimators. Theoretically, the RCL estimators i) are as consistent and doubly robust as the DML estimators, and ii) get rid of the error-compounding issue. Empirically, comprehensive experiments show that i) the RCL estimators give more stable estimates of the causal parameters than the DML estimators, and ii) the RCL estimators outperform traditional estimators and their variants when different machine learning models are applied on both simulation and benchmark datasets.
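To make the baseline concrete, below is a hedged sketch of a cross-fitted DML/AIPW estimator with propensity clipping, which is exactly where the extreme-propensity problem shows up; the nuisance models are illustrative choices, and the RCL correction itself is not reproduced.

```python
# Cross-fitted DML/AIPW baseline with clipped propensity scores.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def dml_ate(x, t, y, n_splits=2, eps=1e-3):
    """Cross-fitted AIPW/DML estimate of the ATE. Propensities are clipped to
    [eps, 1-eps] to curb the extreme estimates mentioned in the abstract."""
    psi = np.zeros(len(y))
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(x):
        prop = LogisticRegression(max_iter=1000).fit(x[train], t[train])
        m1 = RandomForestRegressor(n_estimators=100).fit(
            x[train][t[train] == 1], y[train][t[train] == 1])
        m0 = RandomForestRegressor(n_estimators=100).fit(
            x[train][t[train] == 0], y[train][t[train] == 0])
        e = np.clip(prop.predict_proba(x[test])[:, 1], eps, 1 - eps)
        mu1, mu0 = m1.predict(x[test]), m0.predict(x[test])
        psi[test] = (mu1 - mu0
                     + t[test] * (y[test] - mu1) / e
                     - (1 - t[test]) * (y[test] - mu0) / (1 - e))
    return psi.mean()

# Toy check: the true ATE below is 2.0.
rng = np.random.default_rng(1)
x = rng.normal(size=(2000, 3))
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-x[:, 0])))
y = 2.0 * t + x.sum(axis=1) + rng.normal(size=2000)
print(dml_ate(x, t, y))
```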
Musculoskeletal and neurological disorders are the most common causes of walking problems among older people, and they often lead to diminished quality of life. Analyzing walking-motion data manually requires trained professionals, and the evaluations may not always be objective. To facilitate early diagnosis, recent deep-learning-based methods have shown promising results for automated analysis, as they can discover patterns that traditional machine learning methods have not. We observe that existing works mostly take individual joint features, such as the time series of joint positions, as input. Such features are often preferred because inter-joint features, such as the distance between the feet (i.e., the stride width), are hard to discover from the typically small-scale medical datasets. Consequently, we propose a solution that explicitly takes both individual joint features and inter-joint features as input, relieving the system from the need to discover more complicated features from small data. Owing to the distinct nature of the two types of features, we introduce a two-stream framework, with one stream learning from the time series of joint positions and the other from the time series of relative joint displacements. We further develop a mid-layer fusion module to combine the patterns discovered in the two streams for diagnosis, which yields a complementary representation of the data for better prediction performance. We validate our system on a benchmark dataset of 3D skeleton motion involving 45 patients with musculoskeletal and neurological disorders, and achieve a prediction accuracy of 95.56%, outperforming state-of-the-art methods.
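A bare-bones PyTorch sketch of the described two-stream design is given below; the joint count, layer sizes, choice of root joint for the relative displacements, and number of diagnosis classes are all hypothetical.

```python
# Two-stream skeleton model: one stream for joint positions, one for relative
# joint displacements, merged by a mid-layer fusion module before the head.
import torch
import torch.nn as nn

class TwoStreamGaitNet(nn.Module):
    def __init__(self, n_joints: int = 25, n_classes: int = 2, hidden: int = 64):
        super().__init__()
        def stream(in_dim):
            # temporal convolutions over a (B, C, T) sequence
            return nn.Sequential(
                nn.Conv1d(in_dim, hidden, kernel_size=9, padding=4),
                nn.ReLU(inplace=True),
                nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
                nn.ReLU(inplace=True),
            )
        self.joint_stream = stream(3 * n_joints)              # xyz per joint
        self.rel_stream = stream(3 * n_joints)                # displacement to root joint
        self.fusion = nn.Conv1d(2 * hidden, hidden, kernel_size=1)   # mid-layer fusion
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, joints: torch.Tensor) -> torch.Tensor:
        # joints: (B, T, J, 3) 3D skeleton sequences
        b, t, j, _ = joints.shape
        rel = joints - joints[:, :, :1, :]                    # displacement from joint 0
        f1 = self.joint_stream(joints.reshape(b, t, -1).transpose(1, 2))
        f2 = self.rel_stream(rel.reshape(b, t, -1).transpose(1, 2))
        fused = self.fusion(torch.cat([f1, f2], dim=1))       # (B, hidden, T)
        return self.head(fused.mean(dim=-1))                  # temporal pooling + classifier

logits = TwoStreamGaitNet()(torch.rand(4, 120, 25, 3))        # -> (4, 2)
```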